Subject matter expertise
Subject Matter Expertise vs Professional Management in Collective Sequential Decision Making
Shoresh, David, Loewenstein, Yonatan
Your company's CEO is retiring. You search for a successor. You can promote an employee who is familiar with the company's operations, or recruit an external professional manager. Who should you prefer? It has not been clear how to address this question, the "subject matter expertise vs. professional manager" debate, quantitatively and objectively. We note that a company's success depends on long sequences of interdependent decisions, informed by the often-opposing recommendations of diverse board members. To model this task in a controlled environment, we use chess: a complex, sequential game with interdependent decisions that allows for quantitative analysis of performance and expertise, since the states, actions, and game outcomes are well defined. The availability of chess engines differing in style and expertise allows scalable experimentation. We considered a team of (computer) chess players. At each turn, team members recommend a move and a manager chooses one of the recommendations. We compared the performance of two manager types. For the manager as "subject matter expert", we used another (computer) chess player that assesses the recommendations of the team members based on its own chess expertise. We examined the performance of such managers at different strength levels. To model a "professional manager", we used Reinforcement Learning (RL) to train a network that identifies the board positions in which different team members have a relative advantage, without any pretraining in chess. We further examined this network to see whether any chess knowledge is acquired implicitly. We found that subject matter expertise beyond a minimal threshold does not significantly contribute to team synergy. Moreover, the performance of an RL-trained "professional" manager significantly exceeds that of even the best "expert" managers, while acquiring only limited understanding of chess.
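The two manager types described in the abstract can be caricatured in a few lines of code. This is a minimal sketch, not the authors' implementation: the evaluation function, position features, and per-member weight rows are invented stand-ins (the paper's "professional" manager is a full RL-trained network, not a linear model).

```python
# Caricature of the two manager types; all names and numbers here
# are illustrative stand-ins, not the paper's implementation.

def expert_manager(recommendations, evaluate):
    """'Subject matter expert': score each recommended move with the
    manager's own chess evaluation and play the highest-scoring one."""
    return max(recommendations, key=evaluate)

def professional_manager(position_features, member_weights, recommendations):
    """'Professional manager': predict which team member has a relative
    advantage in this position (a linear stand-in for the RL network)
    and play that member's recommendation, without evaluating the move."""
    scores = [sum(w * f for w, f in zip(weights, position_features))
              for weights in member_weights]
    return recommendations[scores.index(max(scores))]
```

The key contrast the paper draws is visible even here: the expert manager needs a chess evaluation function, while the professional manager only needs to learn who tends to be right in which positions.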
Integrating supervised and unsupervised learning approaches to unveil critical process inputs
Papavasileiou, Paris, Giovanis, Dimitrios G., Pozzetti, Gabriele, Kathrein, Martin, Czettl, Christoph, Kevrekidis, Ioannis G., Boudouvis, Andreas G., Bordas, Stéphane P. A., Koronaki, Eleni D.
This study introduces a machine learning framework tailored to large-scale industrial processes characterized by a plethora of numerical and categorical inputs. The framework aims to (i) discern the critical parameters influencing the output and (ii) generate accurate out-of-sample qualitative and quantitative predictions of production outcomes. Specifically, we address the pivotal question of the significance of each input in shaping the process outcome, using an industrial Chemical Vapor Deposition (CVD) process as an example. The initial objective involves merging subject matter expertise and clustering techniques applied exclusively to the process output, here coating thickness measurements at various positions in the reactor. This approach identifies groups of production runs that share similar qualitative characteristics, such as film mean thickness and standard deviation. In particular, the differences between the outcomes represented by the different clusters can be attributed to differences in specific inputs, indicating that these inputs are critical for the production outcome. Leveraging this insight, we subsequently implement supervised classification and regression methods using the identified critical process inputs. The proposed methodology proves valuable in scenarios with a multitude of inputs and insufficient data for the direct application of deep learning techniques, providing meaningful insights into the underlying processes.
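The two-step pipeline described in the abstract, cluster the runs on the output alone and then look for inputs that separate the clusters, can be sketched as follows. This is a toy illustration: the run data are invented, and a crude median split on mean thickness stands in for the paper's clustering.

```python
from statistics import mean

# Invented stand-in data: each production run has process inputs and
# a vector of coating thickness measurements (the process output).
runs = [
    {"inputs": {"temp": 900, "flow": 5}, "thickness": [2.0, 2.1, 1.9]},
    {"inputs": {"temp": 905, "flow": 5}, "thickness": [2.1, 2.0, 2.0]},
    {"inputs": {"temp": 950, "flow": 5}, "thickness": [2.6, 2.7, 2.5]},
    {"inputs": {"temp": 955, "flow": 5}, "thickness": [2.7, 2.6, 2.6]},
]

# Step 1: cluster runs using the output only (here, a median split on
# mean thickness as a two-cluster stand-in for a real clustering method).
means = [mean(r["thickness"]) for r in runs]
cut = sorted(means)[len(means) // 2 - 1]
labels = [0 if m <= cut else 1 for m in means]

# Step 2: flag inputs whose per-cluster averages differ noticeably;
# those are the candidate critical inputs.
def cluster_gap(name):
    a = [r["inputs"][name] for r, l in zip(runs, labels) if l == 0]
    b = [r["inputs"][name] for r, l in zip(runs, labels) if l == 1]
    return abs(mean(a) - mean(b))

critical = [n for n in ("temp", "flow") if cluster_gap(n) > 1.0]
```

In this toy data, only `temp` separates the thickness clusters, so it is flagged as critical while `flow` is not; the flagged inputs would then feed the supervised classification and regression stage.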
Publishers use AI to catch bad scientists doctoring data
Analysis Shady scientists trying to publish bad research may want to think twice, as academic publishers are increasingly using AI software to automatically spot signs of data tampering. Image duplication, where the same picture (of a cluster of cells, for example) is copied, flipped, rotated, shifted, or cropped, is unfortunately quite common. In cases where the errors aren't accidental, the doctored images are created to look as if the researchers have more data and conducted more experiments than they really did. Image duplication was the top reason papers were retracted by the American Association for Cancer Research (AACR) from 2016 to 2020, according to Daniel Evanko, the association's Director of Journal Operations and Systems. Having to retract a paper damages the authors' and the publisher's reputations.
3 Ways That Mathematical Optimization Can Be Used to Improve Machine Learning Applications - Gurobi
My career as a practitioner and researcher in the data science space has spanned more than 30 years, and during that time I have seen a lot of new advanced analytics technologies – which were touted as "the latest and greatest," "cutting-edge," or "game-changing" or another similar superlative – sizzle and then fizzle. The hype cycles (as Gartner calls them) of these technologies were short – as they failed to deliver real-world business impact and attain long-term commercial viability. One advanced analytics technology that bucks that trend and has been around ever since I entered the professional arena in the early 1990s (and actually long before that with the introduction of linear programming in the 1940s) is mathematical optimization. For decades, mathematical optimization has been widely used by companies of all sizes and stripes to address their complex business problems. The secret to mathematical optimization's staying power is that it has consistently demonstrated that it is capable of generating optimal solutions to large-scale, real-world business problems – and has thereby produced significant business value.
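For concreteness, the linear programming introduced in the 1940s and still used today can be written as a small production-planning model; the coefficients below are invented for illustration.

```latex
% Toy production-planning LP (all coefficients invented): choose
% quantities x1, x2 of two products to maximize profit subject to
% machine-hour and raw-material limits.
\begin{aligned}
\max_{x_1,\, x_2}\quad & 30 x_1 + 20 x_2 && \text{(profit)} \\
\text{s.t.}\quad & 2 x_1 + x_2 \le 100 && \text{(machine hours)} \\
& x_1 + 3 x_2 \le 90 && \text{(raw material)} \\
& x_1,\, x_2 \ge 0
\end{aligned}
```

For these invented coefficients the optimum sits at the vertex $x_1 = 42$, $x_2 = 16$ (where both constraints bind), with objective value 1580; an LP solver finds such an optimal vertex even with thousands of variables and constraints, which is what makes the technique scale to real business problems.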
Five Questions to Ask Yourself Before Starting Any Data Science Project
While technical skill is undeniably important when approaching any data science effort, there is an art to data science and machine learning that doesn't seem to be discussed as often as pure technical skill is. These softer skills help a seasoned data scientist navigate numerous opportunities as seamlessly and efficiently as possible. The fact of the matter is that pretty much every data science effort has its own flair that poses unique challenges that may (or frankly, may not) be worth pursuing. Because applied data science always grounds itself in seeking a solution to a real-world problem, having subject matter expertise about that real-world problem is an absolute must. Of course, it's unreasonable to expect the data scientist to have that subject matter expertise themselves.
Why subject matter experts must weigh in on AI models
CAPTURING data in today's world is easy. Be it an action in the digital world on a website or an application, or in the physical world in a retail or commercial environment -- everything can be tracked. Making sense of that data, however, requires more than just employing data scientists. Quilt.ai Founder and Chief of Product Angad Chowdhry told Tech Wire Asia that subject matter expertise is the most important piece of the puzzle when it comes to making sense of data. Chowdhry's company works with a variety of for-profit and not-for-profit businesses, tapping into data from a plethora of sources and running artificial intelligence (AI)-powered models to answer questions that help better understand markets, invest resources, and plan for the future. Quilt.ai recently collaborated with the School of Oriental and African Studies (SOAS) and the Barbican Centre in the UK on a project to help AI understand the context when it sees a photograph.
The Future of Human In The Loop
Since the 1980s, human/machine interactions, and human-in-the-loop (HITL) scenarios in particular, have been systematically studied. It was often predicted that with an increase in automation, less human-machine interaction would be needed over time. Yet human input is still relied upon for most common forms of AI/ML training, and often even more human insight is required than ever before. As AI/ML evolves and the baseline accuracy of models improves, the type of human interaction required will change from the creation of generalized ground truth from scratch to human review of the worst-performing ML predictions, in order to improve and fine-tune models iteratively and cost-effectively. Deep learning algorithms thrive on labeled data and can be improved progressively as more training data is added over time.
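The shift described above, from labeling everything to reviewing only the worst-performing predictions, is essentially uncertainty-based triage. A minimal sketch, with invented items and confidence scores:

```python
# Route the model's least-confident predictions to human annotators.
# Items, labels, and confidence scores below are invented for illustration.

def select_for_review(predictions, budget):
    """predictions: list of (item_id, predicted_label, confidence).
    Returns the `budget` lowest-confidence item ids for human review."""
    ranked = sorted(predictions, key=lambda p: p[2])
    return [item_id for item_id, _, _ in ranked[:budget]]

preds = [("a", "cat", 0.98), ("b", "dog", 0.51),
         ("c", "cat", 0.87), ("d", "dog", 0.43)]
to_review = select_for_review(preds, budget=2)  # → ["d", "b"]
```

The corrected labels from the reviewed items are then folded back into the training set, so each labeling dollar is spent where the model is weakest rather than spread uniformly over the data.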
Data Science and the Data Scientist – Simply Put.
Machine Learning (a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed) is the intersection of Computer Science with Math and Statistics; all you need to know are the input variables and how to interpret the output. Machine learning algorithms are created using various techniques, and programming machine learning algorithms is the job of a Data Scientist. Traditional Research is the intersection of Math and Statistics with Subject Matter Expertise. This is the kind of research that is not dependent on any technology. Traditional Software is the intersection between Subject Matter Expertise and Computer Science.
Five AI Startup Predictions for 2017
With AI in a full-fledged mania, 2017 will be the year of reckoning. Pure hype trends will reveal themselves to have no fundamentals behind them. Paradoxically, 2017 will also be the year of breakout successes from a handful of vertically-oriented AI startups solving full-stack industry problems that require subject matter expertise, unique data, and a product that uses AI to deliver its core value proposition. Over the past year a mania has risen up around 'bots.' In the technical community, when we talk about bots, we usually mean software agents, which tend to be defined by "four key notions that distinguish agents from arbitrary programs; reaction to the environment, autonomy, goal-orientation and persistence." Enterprises have decided to usurp the term 'bot' to mean 'any form of business process automation' and have created the term 'RPA', robotic process automation.
"Your Expertise Is No Longer Needed" - Sincerely, DEEP Learning.
There is a trend happening right now in machine learning where subject matter expertise is being replaced. Tasks that previously required a subject matter expert now have naive approaches that are beating the world's best experts. What the smartest and brightest experts know, which was previously respected, in some cases offers minimal to no value now. One of the major problems that most people see when dealing with natural language processing (NLP) is the issue of sparsity. The words that are being picked up in the documents, social stream, or whatever feed you care about are too unique.
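The sparsity problem the author points to is easy to see in a toy bag-of-words setup: each document uses only a small slice of the shared vocabulary, so its feature vector is mostly zeros. The corpus below is invented for illustration.

```python
from collections import Counter

# Invented three-document corpus; in real feeds the vocabulary is far
# larger relative to each document, so vectors are even sparser.
docs = [
    "the model beat the expert",
    "sparse features hurt recall",
    "experts label training data",
]
tokenized = [d.split() for d in docs]
vocab = sorted(set(w for toks in tokenized for w in toks))

def sparsity(tokens):
    """Fraction of vocabulary entries that are zero in a document's
    bag-of-words vector."""
    counts = Counter(tokens)
    zeros = sum(1 for w in vocab if counts[w] == 0)
    return zeros / len(vocab)
```

Even with only three short documents and a 12-word vocabulary, each document's vector is about two-thirds zeros; with a realistic vocabulary of tens of thousands of words, the zeros dominate overwhelmingly, which is why naive word-count features struggle and dense learned representations took over.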